Type System Support for Floating-Point Computation
Author
Abstract
Floating-point arithmetic is often seen as untrustworthy. We show how manipulating precisions according to the following rules of thumb enhances the reliability of and removes surprises from calculations: • Store data narrowly, • compute intermediates widely, and • derive properties widely. Further, we describe a typing system for floating point that both supports and is supported by these rules. A single type is established for all intermediate computations. The type describes a precision at least as wide as all inputs to and results from the computation. Picking a single type provides benefits to users, compilers, and interpreters. The type system also extends cleanly to encompass intervals and higher precisions.
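To make the rules of thumb concrete, the short C sketch below (an illustration of our own, not taken from the paper; the function and data are hypothetical) stores its data narrowly as float, carries every intermediate in a single wider type (double), and returns the derived result in that wide type.

    /* A minimal sketch of "store narrowly, compute widely": data is stored
     * as float, intermediates and the derived result are carried as double,
     * a single type at least as wide as every input and output. */
    #include <stdio.h>

    static double dot(const float *x, const float *y, int n)
    {
        double sum = 0.0;                      /* wide intermediate */
        for (int i = 0; i < n; i++)
            sum += (double)x[i] * (double)y[i]; /* widen before multiplying */
        return sum;                             /* derived property, wide */
    }

    int main(void)
    {
        float x[3] = {1.0f, 1e-8f, 3.0f};       /* narrowly stored data */
        float y[3] = {1.0f, 1e8f, -0.333333f};
        printf("%.17g\n", dot(x, y, 3));
        return 0;
    }

Because only one intermediate type appears, a compiler or interpreter can evaluate the whole expression at that precision without worrying about mixed-width rounding surprises.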
Similar resources
Half-precision Floating-point Ray Traversal
A 16-bit floating-point format defined in the IEEE 754-2008 standard: sign (1 bit), exponent (5 bits), fraction (10 bits). Storage support on most modern CPUs and GPUs; native computation support especially on mobile platforms (the upcoming nVidia Pascal desktop GPUs are announced to have native computation support). Pros: smaller cache footprint (compared to "regular" 32-bit floats), more energy effici...
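The bit layout described above can be unpacked with a few shifts and masks; the following C sketch (our own illustration, not from the cited work) decodes the sign, exponent, and fraction fields of a binary16 value.

    /* Unpack the IEEE 754-2008 binary16 fields: 1 sign bit, 5 exponent
     * bits (bias 15), 10 fraction bits. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t h = 0x3C00;                    /* binary16 encoding of 1.0 */
        unsigned sign     = (h >> 15) & 0x1u;
        unsigned exponent = (h >> 10) & 0x1Fu;  /* 5 bits */
        unsigned fraction =  h        & 0x3FFu; /* 10 bits */
        printf("sign=%u exponent=%u fraction=0x%03X\n", sign, exponent, fraction);
        return 0;
    }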
Manipulation of Matrices Symbolically
Traditionally, matrix algebra in computer algebra systems is "implemented" in three ways: • numeric explicit computation in a special arithmetic domain: exact rational or integer, high-precision software floating-point, interval, or conventional hardware floating-point, • 'symbolic' explicit computation with polynomial or other expression entries, • (implicit) matrix computation with symbols def...
High-Precision Arithmetic in Mathematical Physics
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, ...
Floating Point to Fixed Point Conversion of C Code
In processors that do not support floating-point instructions, using fixed-point arithmetic instead of floating-point emulation trades off computation accuracy for execution speed. This trade-off is often profitable. In many cases, like embedded systems, low-cost and speed bounds make it the only acceptable option. We present an environment supporting fixed-point code generation from C programs...
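A minimal sketch of the kind of arithmetic such a conversion substitutes for floating point, assuming a Q16.16 fixed-point format (the format choice, type, and function names are illustrative, not taken from the cited work):

    /* Q16.16 fixed point: 16 integer bits, 16 fraction bits, stored in int32_t. */
    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t q16_16;

    static q16_16 to_q(double x)   { return (q16_16)(x * 65536.0); }
    static double from_q(q16_16 x) { return x / 65536.0; }

    /* Multiply two Q16.16 values: widen to 64 bits so the intermediate
     * product fits, then drop the extra 16 fraction bits. */
    static q16_16 mul_q(q16_16 a, q16_16 b)
    {
        return (q16_16)(((int64_t)a * b) >> 16);
    }

    int main(void)
    {
        q16_16 a = to_q(3.25), b = to_q(1.5);
        printf("%f\n", from_q(mul_q(a, b)));   /* prints roughly 4.875 */
        return 0;
    }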
Unrestricted Algorithms for Elementary and Special Functions
Floating-point computations are usually performed with fixed precision: the machine used may have “single” or “double” precision floating-point hardware, or on small machines fixed-precision floating-point operations may be implemented by software or firmware. Most high-level languages support only a small number of floating-point precisions, and those which support an arbitrary number usually ...
Publication date: 2001